
Math HW3
Exercise 1
Problem Analysis
The objective is to find the coefficients $(\alpha, \beta, \gamma)$ for the finite difference formula:
$$
D_{h}f(\overline{x})=\frac{\alpha f(\overline{x})+\beta f(\overline{x}-h)+\gamma f(\overline{x}-2h)}{h}
$$
The analysis begins by substituting the Taylor series expansions for $f(\overline{x}-h)$ and $f(\overline{x}-2h)$ around the point $\overline{x}$ into the formula.
The expansions are:
$$
f(\overline{x}-h) = f(\overline{x}) - hf'(\overline{x}) + \frac{h^2}{2}f''(\overline{x}) - \frac{h^3}{6}f'''(\overline{x}) + O(h^4)
$$
$$
f(\overline{x}-2h) = f(\overline{x}) - 2hf'(\overline{x}) + 2h^2f''(\overline{x}) - \frac{4h^3}{3}f'''(\overline{x}) + O(h^4)
$$
Substituting these into the formula and grouping terms by derivatives of $f(\overline{x})$ results in:
$$
D_{h}f(\overline{x}) = \frac{\alpha + \beta + \gamma}{h} f(\overline{x}) + (-\beta - 2\gamma) f'(\overline{x}) + \left( \frac{\beta}{2} + 2\gamma \right)h f''(\overline{x}) + O(h^2)
$$
To approximate $f'(\overline{x})$, the coefficient of $f'(\overline{x})$ must be $1$, and the coefficient of $f(\overline{x})$ must be $0$. This yields two necessary equations:
- $\alpha + \beta + \gamma = 0 \quad (1)$
- $-\beta - 2\gamma = 1 \quad (2)$
a) Solution for First-Order Accuracy
For the formula to be at least first-order accurate, equations (1) and (2) must be satisfied.
From equation (2), $\beta$ can be expressed in terms of $\gamma$:
$$
\beta = -1 - 2\gamma
$$
Substituting this into equation (1):
$$
\alpha + (-1 - 2\gamma) + \gamma = 0 \implies \alpha = 1 + \gamma
$$
Therefore, the family of coefficients $(\alpha, \beta, \gamma)$ that provides at least first-order accuracy is given by:
$$
(\alpha, \beta, \gamma) = (1+\gamma, -1-2\gamma, \gamma), \quad \forall \gamma \in \mathbb{R}
$$
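As a quick numerical sanity check, the sketch below (assuming Python with NumPy; the test function $\sin(x)$, the point $\overline{x}=1$, and the value $\gamma=3$ are arbitrary illustrative choices) evaluates one member of this family and shows the error decreasing linearly with $h$:

```python
import numpy as np

def D_h(f, x, h, gamma):
    # First-order family: (alpha, beta, gamma) = (1 + gamma, -1 - 2*gamma, gamma)
    alpha, beta = 1 + gamma, -1 - 2 * gamma
    return (alpha * f(x) + beta * f(x - h) + gamma * f(x - 2 * h)) / h

f, df = np.sin, np.cos      # illustrative test function and its exact derivative
x_bar, gamma = 1.0, 3.0     # arbitrary point and arbitrary gamma (gamma != 1/2)

for h in (1e-1, 1e-2, 1e-3):
    err = abs(D_h(f, x_bar, h, gamma) - df(x_bar))
    print(f"h = {h:.0e}   |D_h f - f'| = {err:.3e}")
# The error drops by roughly a factor of 10 each time h does: first-order accuracy.
```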
b) Solution for Second-Order Accuracy
To achieve second-order accuracy, the truncation error must be $O(h^2)$. This requires the coefficient of the $h$ term in the error expansion to also be zero.
$$\frac{\beta}{2} + 2\gamma = 0$$
This creates a system of three linear equations to be solved for the unique values of $(\alpha, \beta, \gamma)$:
- $\alpha + \beta + \gamma = 0 \quad (1)$
- $-\beta - 2\gamma = 1 \quad (2)$
- $\frac{\beta}{2} + 2\gamma = 0 \quad (3)$
From equation (3), we find that $\beta = -4\gamma$. Substituting this into equation (2):
$$
-(-4\gamma) - 2\gamma = 1 \implies 2\gamma = 1 \implies \gamma = \frac{1}{2}
$$
With the value for $\gamma$, $\beta$ can be found:
$$
\beta = -4 \left( \frac{1}{2} \right) = -2
$$
Finally, substituting $\beta$ and $\gamma$ into equation (1) yields $\alpha$:
$$
\alpha + (-2) + \frac{1}{2} = 0 \implies \alpha = \frac{3}{2}
$$
The unique values which give a formula of order 2 are:
$$
\left(\alpha, \beta, \gamma\right) = \left(\frac{3}{2}, -2, \frac{1}{2}\right)
$$
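A short symbolic check (a sketch using SymPy, with $\sin(x)$ as an arbitrary test function) confirms that for these coefficients the leading term of the truncation error is proportional to $h^{2}$:

```python
import sympy as sp

x, h = sp.symbols('x h')
f = sp.sin(x)                                   # any smooth test function
alpha, beta, gamma = sp.Rational(3, 2), -2, sp.Rational(1, 2)

# Difference formula with the second-order coefficients derived above
D_h = (alpha * f + beta * f.subs(x, x - h) + gamma * f.subs(x, x - 2 * h)) / h

truncation = sp.series(D_h - sp.diff(f, x), h, 0, 3)
print(truncation)   # -> h**2*cos(x)/3 + O(h**3), i.e. the error is O(h^2)
```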
Exercise 2
a) Midpoint Formula Calculation
The first task is to compute the integral of the function $f(x)=e^{-x}\sin(x)$ on the interval $[a,b]=[0,2]$. The computation uses the composite midpoint formula with $M=10$ sub-intervals.
The resulting numerical approximation is $Q_{h}^{pm}(f)\approx 0.468592$, while the exact value of the integral is $I(f)\approx 0.466630$, so the absolute error is approximately $0.001962$.
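A minimal sketch of this computation (assuming Python with NumPy; the helper name `midpoint` is illustrative rather than a prescribed interface):

```python
import numpy as np

def midpoint(f, a, b, M):
    """Composite midpoint rule on M equal sub-intervals."""
    h = (b - a) / M
    mids = a + h / 2 + h * np.arange(M)      # midpoints of the sub-intervals
    return h * np.sum(f(mids))

f = lambda x: np.exp(-x) * np.sin(x)
a, b, M = 0.0, 2.0, 10

Q = midpoint(f, a, b, M)
I = (1 - np.exp(-2) * (np.sin(2) + np.cos(2))) / 2   # exact integral
print(f"Q = {Q:.6f},  I = {I:.6f},  |I - Q| = {abs(I - Q):.6f}")
# Prints approximately: Q = 0.468592,  I = 0.466630,  |I - Q| = 0.001962
```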
b) Convergence of the Midpoint Formula
This section analyzes the convergence of the composite midpoint formula for the same function and interval. The number of sub-intervals is varied as $M=10^{1},10^{2},10^{3},\dots,10^{5}$.
The absolute error, $|I(f)-Q_{h}^{pm}(f)|$, is computed for each corresponding step size, $h=(b-a)/M$. This error is the difference between the numerical result and the exact value of the integral, $I=\left(1-e^{-2}(\sin(2)+\cos(2))\right)/2$.
A graph of the error versus the step size $h$ is then created using a logarithmic scale for both axes. In such a plot, the slope of the resulting line corresponds to the order of convergence.
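One way to carry out this study (a sketch assuming Python with NumPy and Matplotlib; function and variable names are illustrative) is to compute the error for each $M$ and fit a straight line to the logarithmic data:

```python
import numpy as np
import matplotlib.pyplot as plt

f = lambda x: np.exp(-x) * np.sin(x)
a, b = 0.0, 2.0
I = (1 - np.exp(-2) * (np.sin(2) + np.cos(2))) / 2       # exact integral

def midpoint(f, a, b, M):
    h = (b - a) / M
    return h * np.sum(f(a + h / 2 + h * np.arange(M)))

Ms = [10**k for k in range(1, 6)]                        # M = 10^1, ..., 10^5
hs = [(b - a) / M for M in Ms]
errs = [abs(I - midpoint(f, a, b, M)) for M in Ms]

# The slope of the log-log data estimates the order of convergence (about 2 here)
slope = np.polyfit(np.log(hs), np.log(errs), 1)[0]
print(f"estimated order: {slope:.2f}")

plt.loglog(hs, errs, 'o-')
plt.xlabel('h')
plt.ylabel(r'$|I(f)-Q_h^{pm}(f)|$')
plt.show()
```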
**Plot Analysis** The plot displays a straight line with a negative slope. A linear fit to the logarithmic data reveals a slope of approximately 2. This visually confirms that the method exhibits 2nd-order convergence, which aligns with the theoretical error bounds for the composite midpoint rule.
c) Convergence of Trapezoidal and Simpson’s Formulas
The analysis is repeated using the composite trapezoidal formula and the composite Simpson’s formula. The errors for both methods, $|I(f)-Q_{h}^{trap}(f)|$ and $|I(f)-Q_{h}^{simp}(f)|$, are plotted on the same graph in logarithmic scale.
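The two composite rules can be sketched as follows (again assuming Python with NumPy; Simpson's rule applies the 1-4-1 weights on each sub-interval together with its midpoint). The slope estimate below stops at $M=10^{3}$, since on much finer grids Simpson's error approaches machine precision:

```python
import numpy as np

def trapezoid(f, a, b, M):
    # Composite trapezoidal rule on M equal sub-intervals
    x = np.linspace(a, b, M + 1)
    h = (b - a) / M
    return h * (0.5 * f(x[0]) + np.sum(f(x[1:-1])) + 0.5 * f(x[-1]))

def simpson(f, a, b, M):
    # Composite Simpson's rule: 1-4-1 weights on each sub-interval and its midpoint
    x = np.linspace(a, b, M + 1)
    h = (b - a) / M
    mids = (x[:-1] + x[1:]) / 2
    return h / 6 * (f(x[0]) + 2 * np.sum(f(x[1:-1])) + 4 * np.sum(f(mids)) + f(x[-1]))

f = lambda x: np.exp(-x) * np.sin(x)
a, b = 0.0, 2.0
I = (1 - np.exp(-2) * (np.sin(2) + np.cos(2))) / 2

for name, rule in [("trapezoidal", trapezoid), ("Simpson", simpson)]:
    hs, errs = [], []
    for M in (10, 100, 1000):
        hs.append((b - a) / M)
        errs.append(abs(I - rule(f, a, b, M)))
    slope = np.polyfit(np.log(hs), np.log(errs), 1)[0]
    print(f"{name}: slope ~ {slope:.2f}")   # about 2 and about 4, respectively
```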
**Plot Analysis and Comparison** The resulting graph contains two distinct straight lines, one for each method.
- Trapezoidal Rule: The error line for the trapezoidal rule has a slope of approximately 2. This confirms its theoretical 2nd-order convergence.
- Simpson’s Rule: The error line for Simpson’s rule is significantly steeper, with a slope of approximately 4. This demonstrates its theoretical 4th-order convergence.
A comparison of the results shows that for any given step size $h$, the error from Simpson’s rule is substantially smaller than the error from the trapezoidal rule, highlighting the superior accuracy of the higher-order method for smooth functions.
d) Analysis for a Non-Smooth Function
Finally, the convergence analysis is repeated for all three methods on a new function, $f(x)=\sqrt{|x|^{3}}$, over the interval $[a,b]=[-2,2]$. The exact value for this integral is $I=\frac{16}{5}\sqrt{2}$.
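The earlier sketches carry over directly; a minimal version for the composite midpoint rule (Python with NumPy; the trapezoidal and Simpson's helpers from the previous sketch can be dropped into the same loop):

```python
import numpy as np

g = lambda x: np.sqrt(np.abs(x) ** 3)     # f(x) = sqrt(|x|^3)
a, b = -2.0, 2.0
I_exact = 16 * np.sqrt(2) / 5             # exact value of the integral

def midpoint(f, a, b, M):
    h = (b - a) / M
    return h * np.sum(f(a + h / 2 + h * np.arange(M)))

for M in [10**k for k in range(1, 6)]:
    print(f"M = {M:6d}   error = {abs(I_exact - midpoint(g, a, b, M)):.3e}")
# Fitting the log-log data (and repeating with the trapezoidal and Simpson's
# rules from the previous sketch) shows how the convergence rates change.
```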
Comments on the Obtained Results
The function $f(x)=\sqrt{|x|^{3}}$ is not sufficiently smooth at the point $x=0$ within the integration interval. While the function and its first derivative are continuous, its second derivative, $f''(x) \propto |x|^{-1/2}$, is unbounded at $x=0$.
The theoretical error estimates that predict 2nd and 4th-order convergence for these methods are based on the assumption that the function’s higher-order derivatives are continuous and bounded. Since this condition is violated, a degradation in the observed convergence rates is expected.
**Plot Analysis** The log-log plot for this non-smooth function reveals the following:
- The convergence rates for all three methods are significantly reduced.
- The slopes of the error lines for the midpoint, trapezoidal, and Simpson's rules are all approximately **1.5**.
- The high-order accuracy advantage of Simpson's rule is lost; its performance becomes comparable to that of the other two lower-order methods.

This result demonstrates that the practical performance and convergence rate of a numerical integration method are fundamentally limited by the smoothness of the function being integrated.